Score-based divergences have been widely used in machine learning and statistics applications. Despite their empirical success, a blindness problem has been observed when applying them to multi-modal distributions. In this work, we discuss the blindness problem and propose a new family of divergences that can mitigate it. We illustrate our proposed divergence in the context of density estimation and report improved performance compared to traditional approaches.
We introduce integrated learning, a principled framework for incorporating weak supervision into the training process of machine learning models. Our approach jointly trains the end model and a label model that aggregates multiple sources of weak supervision. We introduce a label model that can learn to aggregate weak supervision sources differently for different data points, and that takes the performance of the end model into account during training. We show that our approach outperforms existing weak learning techniques across a set of 6 benchmark classification datasets. When both a small amount of labeled data and weak supervision are available, the performance gains are both consistent and large, reliably yielding a 2-5 point improvement in test F1 score over non-integrated methods.
Density-based out-of-distribution (OOD) detection has recently been shown to be unreliable for the task of detecting OOD images. Various density-ratio-based methods achieve good empirical performance, but they typically lack a principled probabilistic modelling explanation. In this work, we propose to unify density-ratio-based methods under a novel framework that builds energy-based models and employs different base distributions. Under our framework, the density ratio can be viewed as the unnormalized density of an implicit semantic distribution. Furthermore, we propose to estimate the density ratio of a data sample directly via class-ratio estimation. We report competitive results on the OOD image problem compared with recent works that require training deep generative models for the task. Our approach enables a simple yet effective path towards solving the OOD detection problem.
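The scoring rule behind density-ratio-based OOD detection can be illustrated with a toy sketch. The code below is purely illustrative, not the paper's method: it fits a foreground Gaussian model to training data, pairs it with a broader base distribution standing in for the background density, and scores new samples by the log density ratio, so that low-ratio inputs are flagged as OOD.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy training data standing in for the in-distribution dataset.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# Foreground model: a full Gaussian fit to the training data.
p = multivariate_normal(mean=train.mean(axis=0), cov=np.cov(train.T))
# Base/background model: a broader isotropic Gaussian, standing in for
# the smoothed background distribution used by density-ratio methods.
q = multivariate_normal(mean=train.mean(axis=0), cov=4.0 * np.eye(2))

def ood_score(x):
    """Log density ratio: higher score = more in-distribution."""
    return p.logpdf(x) - q.logpdf(x)

in_dist = rng.normal(0.0, 1.0, size=(5, 2))
out_dist = rng.normal(6.0, 1.0, size=(5, 2))
print("ID mean score: ", float(ood_score(in_dist).mean()))
print("OOD mean score:", float(ood_score(out_dist).mean()))
```

In practice the two densities would be deep models rather than Gaussians, and the abstract's point is precisely that the ratio itself can be estimated directly, without training either density.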
The recently proposed Neural Local Lossless Compression (NeLLoC), which is based on a local autoregressive model, has achieved state-of-the-art (SOTA) out-of-distribution (OOD) generalization performance on the image compression task. In addition to encouraging OOD generalization, the local model also allows parallel inference in the decoding stage. In this paper, we propose two parallelization schemes for local autoregressive models. We discuss the practicalities of implementing the schemes and provide experimental evidence of significant gains in compression runtime compared with the previous non-parallel implementation.
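Why local autoregressive models admit parallel decoding can be seen with a small sketch. This is a generic wavefront scheme, not necessarily either of the paper's two schemes: if pixel (i, j) depends only on already-decoded pixels above and to the left within a small window, then every pixel on the same anti-diagonal i + j = d has a fully decoded context, so each anti-diagonal can be processed in parallel.

```python
# Wavefront decoding order for a causal local autoregressive model
# over an H x W image: pixels on the same anti-diagonal i + j = d
# are mutually independent given the already-decoded pixels, so each
# inner list below is one parallel decoding step.
H, W = 4, 4
decode_order = [
    [(i, d - i) for i in range(H) if 0 <= d - i < W]
    for d in range(H + W - 1)
]

for step, coords in enumerate(decode_order):
    print(step, coords)
```

A sequential decoder needs H * W steps, while the wavefront needs only H + W - 1, with the per-step work distributed across workers.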
Continual learning aims to learn a sequence of tasks from a dynamic data distribution. Without access to old training samples, it is difficult to determine whether the knowledge transfer from old tasks is positive or negative. If old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks will further aggravate the interference and degrade continual learning performance. By contrast, biological neural networks can actively forget old knowledge that conflicts with the learning of new experiences, by regulating learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks. Under the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks, and Atari reinforcement learning tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.
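The expand-then-combine idea can be sketched as a pair of quadratic penalties, in the spirit of regularization-based Bayesian continual learning. All names and weights below are illustrative, not AFEC's actual objective: one term (weighted by per-parameter importances `f_old`) pulls toward the consolidated old-task solution, while a second term pulls toward parameters fit to the new task alone, which is how conflicting old knowledge can be selectively forgotten.

```python
import numpy as np

def afec_style_penalty(theta, theta_old, f_old, theta_new, f_new,
                       lam_old=1.0, lam_new=1.0):
    """Illustrative combined regularizer: remember + actively forget."""
    # Pull toward the old-task solution, weighted by old-task importance.
    keep = lam_old * np.sum(f_old * (theta - theta_old) ** 2)
    # Pull toward the expanded, new-task-only parameters.
    forget = lam_new * np.sum(f_new * (theta - theta_new) ** 2)
    return keep + forget

theta = np.array([0.5, 0.5])       # current shared parameters
theta_old = np.zeros(2)            # consolidated old-task parameters
theta_new = np.ones(2)             # parameters fit to the new task only
f_old = np.array([1.0, 0.1])       # importance of each weight for old tasks
f_new = np.array([0.1, 1.0])       # importance for the new task
print(afec_style_penalty(theta, theta_old, f_old, theta_new, f_new))
```

The relative weights `lam_old` and `lam_new` control the trade-off between stability (remembering) and plasticity (forgetting), which is what makes such a term usable in a plug-and-play fashion on top of an existing continual learning loss.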
Out-of-distribution (OOD) detection and lossless compression constitute two problems that can be solved by training a probabilistic model on a first dataset, with subsequent likelihood evaluation on a second dataset whose data distribution differs. By defining the generalization of probabilistic models in terms of likelihood, we show that, in the case of image models, generalization capability is dominated by local features. This motivates our proposal of a local autoregressive model that exclusively models local image features, towards improved performance. We apply the proposed model to the OOD detection task and achieve state-of-the-art unsupervised OOD detection performance without introducing additional data. In addition, we use our model to build a new lossless image compressor, NeLLoC (Neural Local Lossless Compressor), and report state-of-the-art compression rates and model size.
For distributions $\mathbb{P}$ and $\mathbb{Q}$ with different supports or undefined densities, the divergence $\textrm{D}(\mathbb{P}||\mathbb{Q})$ may not exist. We define a Spread Divergence $\tilde{\textrm{D}}(\mathbb{P}||\mathbb{Q})$ on modified $\mathbb{P}$ and $\mathbb{Q}$ and describe sufficient conditions for the existence of such a divergence. We demonstrate how to maximize the discriminatory power of a given divergence by parameterizing and learning the spread. We also give examples of using a Spread Divergence to train implicit generative models, including linear models (Independent Components Analysis) and non-linear models (Deep Generative Networks).
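One concrete construction, using the Gaussian noise kernel given as an example in the abstract's setting, is to convolve both distributions with the same smoothing kernel so that the resulting densities share full support, then take the divergence of the smoothed pair:

```latex
\tilde{p}(y) = \int k(y \mid x)\, p(x)\, \mathrm{d}x, \qquad
\tilde{q}(y) = \int k(y \mid x)\, q(x)\, \mathrm{d}x, \qquad
k(y \mid x) = \mathcal{N}\!\left(y \mid x, \sigma^{2}\right),
```

with $\tilde{\textrm{D}}(\mathbb{P}||\mathbb{Q}) \equiv \textrm{D}(\tilde{p}\,||\,\tilde{q})$. Both smoothed densities are strictly positive everywhere, so the divergence exists even when $\mathbb{P}$ and $\mathbb{Q}$ have disjoint supports; injectivity of the convolution then ensures $\tilde{\textrm{D}}(\mathbb{P}||\mathbb{Q}) = 0$ only if $\mathbb{P} = \mathbb{Q}$.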
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or only marginally benefit, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
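The simpler of the two attacks, NAIVEATTACK, amounts to stamping a fixed trigger onto raw images before distillation and relabeling them with the attacker's target class. The sketch below is a toy illustration of that idea only (DOORPING's iterative trigger updates are not shown); the patch size, value, and shapes are arbitrary choices for the example.

```python
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a square trigger in the bottom-right corner of each image."""
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned

clean = np.zeros((8, 32, 32))              # a toy batch of 32x32 images
target_label = 0                           # attacker-chosen target class
poisoned = add_trigger(clean)
poisoned_labels = np.full(len(poisoned), target_label)

print(poisoned[0, -1, -1], clean[0, -1, -1])
```

The key point of the abstract is *where* this happens: the trigger is injected into the data that the distillation procedure consumes, so the backdoor is baked into the small synthetic dataset and survives into any model later trained on it.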